====== HSTL standard IOBs ======

===== Notes =====
  * This is the very ** first time ** that the HSTL IOB standard has been tested.
  * All ** FuncMon error counters have full visibility: ** the system logs the full 32-bit values.
  * All ** "low-speed" (2 MHz) tests **.
  * All ** un-registered (combinational) tests**.
  * The ** DUT output "grouping" has been inferred from the actual data,** since it does not match the information in the provided (developer notes) documentation. The inferred groups are so far not completely accurate (not enough information available).
  * We have detected ** strange fault modes,** described in the following section. So far we have not been able to confirm whether these are real upset-induced faults or false positives due to a test-environment flaw. Similar fault modes are present in the IOSERDES tests. The other IOB tests done at 201703-TAMU do not have enough error-counter visibility to check this behavior (the system only logged each counter's LSByte).
  * There is a ** signal integrity (SI) flaw,** at least in each FuncMon(out)->(in)DUT link: there are no on-board impedance-matching resistors on those lines. According to the DUT constraints file provided by Raytheon, the DUT inputs use the DCI feature, but there are no on-board calibration resistors in the affected IOB banks.
    * The following information was kindly provided by Austin: DCI won't work under this condition, because the resistance is set to its maximum value (~2 k$\Omega$) when no calibration resistors are attached, so the I/Os become very weak drivers, about 1 to 2 mA under these conditions; if the signaling is slow enough, it still works. And since a ~2 k$\Omega$ driver is too weak to cause overshoot or undershoot, it also 'solves' any PCB SI issues.
    * Given that FuncMon always sends the same bit pattern to the DUT, any corruption caused by impedance mismatch (wave reflections) would produce a steady-state waveform. That would give a fault mode inconsistent with the observations (e.g., error counters slowing down for an interval of time).
    * Lab testing is pending to try to reproduce the observed patterns in an off-beam environment.
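As a rough order-of-magnitude check on the quoted drive strength (assuming a nominal ~1.5 V HSTL rail, which is our assumption, not stated in these notes): $I \approx V/R \approx 1.5\,\mathrm{V} / 2\,\mathrm{k}\Omega \approx 0.75\,\mathrm{mA}$, i.e. the same order as the quoted 1 to 2 mA driver.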

\\
===== HSTL: Summary of strange features =====
----

In the following delta-count plot we show all the HSTL test runs. Time runs from top to bottom, and each column corresponds to a (32-bit) error counter (see the abscissa labels). Darker blue corresponds to a higher delta-count value. The image has enough resolution for detail: click on it and zoom in. \\
These are CRAM hit patterns, so we expect the counters to always run at the same (1/2 of free-running) speed, with variations corresponding only to the variance in the counters' sampling times (before the values are sent through the serial port to the GUI/log). Instead, we see some anomalous delta-count variation zones, described after the plot; they correspond to the features near the colored dot marks.

{{ data_analysis:v7-dynamic:201703-tamu:iob:hstl:testruns_iob--non-packaged_counters_runs--previous_to_processing_-features_marked.png }}
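As a minimal sketch (illustrative only, not the actual analysis code), the delta-counts plotted above can be derived from consecutive logged 32-bit counter samples, taking counter wraparound into account; the function name and sample values below are assumptions:

```python
# Illustrative sketch: per-interval delta-counts from logged 32-bit
# error counter samples, handling wraparound at 2^32.

MOD = 1 << 32  # counters are 32-bit, so they wrap at 2^32

def delta_counts(samples):
    """Increment between consecutive counter samples, modulo 2^32."""
    return [(b - a) % MOD for a, b in zip(samples, samples[1:])]

# Illustrative samples: a counter advancing ~26000 counts per sampling
# interval (the nominal "low-speed" rate), crossing a 32-bit wraparound.
samples = [4294941296, 0, 26000, 52000]
print(delta_counts(samples))  # -> [26000, 26000, 26000]
```

Handling the modulo-2^32 wrap is what lets the plot show a steady ~26000-count band even when a raw counter value rolls over.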

** 1. Transients of delta-count values ** (marked in red). \\
These are transients where the error count seems to run slower than it should. We observe two patterns, which probably correspond to the same failure mode.

  * ** 1.1. ** Sometimes, when a CRAM upset affecting an output is triggered, the first group of delta-count values rises slowly up to the order of values typical of counters affected by a "broken" CRAM. We see two nearby outputs (in the same group) with this behavior, one following the other within a few time ticks. The delta-count rise time is ~275 ms for both counters.

  * ** 1.2. ** For some counters showing the typical CRAM-upset pattern (delta-counts within their typical range), at some point in time this effect sets in and the delta-counts slowly decrease to values much lower than usual. We see this behavior in the four counters of one group. Note that here the characteristic times are not as well defined as in case 1.1. Counter16 has two local minima, at delta=96 and delta=6, and Counter17 also has two minima, at delta=64 and delta=124. The other two counters show no such clear, deep minima. Finally, all four counters stay for considerable amounts of time at delta values below 10000.


** 2. Error counter "suddenly" speeding up ** (marked in yellow). \\
We see a few counters with a strong blue color. This is what we find in the raw files: \\
-- Delta error counts larger than expected, by a factor of ~2. \\
-- SEFI/Global events seem to "reset" the affected output back to the typical range of delta values for a "broken" CRAM. \\
-- The event in Counter63 is the only case where this "2x speed" behavior was triggered while the counter was already running. \\
This is probably another configuration upset that would need to be accounted for, although it is difficult to prove.
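A hedged sketch of how this "2x speed" could be quantified: dividing a counter's median delta by the nominal free-running value gives a speed factor of ~1.0 for an ordinary "broken CRAM" counter and ~2.0 for this case. The function name and the sample deltas are assumptions; only the ~26000 nominal figure comes from this section.

```python
# Illustrative sketch: estimate how fast a counter runs relative to the
# expected free-running rate. NOMINAL_DELTA is the ~26000 average delta
# quoted in this section for the "low-speed" (2 MHz) tests.
from statistics import median

NOMINAL_DELTA = 26000

def speed_factor(deltas, nominal=NOMINAL_DELTA):
    """Median delta divided by the nominal rate: ~1.0 for a normal
    "broken CRAM" counter, ~2.0 for the case-2 behavior."""
    return median(deltas) / nominal

print(speed_factor([52100, 51800, 52400]))  # -> ~2.0 (case-2 pattern)
```

Using the median rather than the mean keeps the estimate robust against the "boundary" upset-transition samples.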


** 3. Counter running slower than expected ** (marked in green). \\
In general, zooming in on the plot and comparing the values of the running counters horizontally, we find the delta-count values to be approximately the same.
But in some cases, like the one shown, there is a slight contrast between a counter and its neighbors: the "singular" one has a lighter color, which in fact represents scaled-down delta-count values. In this case the scale factor seems to be ~0.85.
Also, the three counters to the left of the affected one have delta-count values not in as good agreement as in almost every other CRAM upset pattern.


As a reference, the coordinates of these cases in the raw data files are listed below.
Remember that for these "low-speed" tests, the "free-running" counters' delta values average ~26000 counts, with a maximum of ~37000 and a minimum of ~14000 (the latter excluding the "boundary" upset-transition samples).

^ CASE ^ RAW FILE ^                 Date/Time  ^             Counters  ^
| 1.1 |  hstl_rtn_unreg_104.LGF |   3/13/2017 1:32:55 AM |   0, 1  |
| 1.2 |  hstl_rtn_unreg_104.LGF |   3/13/2017 1:37:16 AM |   16,17,18,19  |
| 2   |  hstl_rtn_unreg_104.LGF |   3/13/2017 1:33:03 AM |   63  |
| 3   |  hstl_rtn_unreg_103.LGF |   3/13/2017 1:27:27 AM |   55  |
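The reference figures above suggest a simple screening rule for single delta samples: flag anything outside the ~14000 to ~37000 band of normal free-running deltas. This is a minimal sketch under that assumption; the function name and labels are illustrative, not from the analysis code.

```python
# Illustrative sketch: flag one delta-count sample against the nominal
# free-running band quoted in this section (~14000 min, ~37000 max).

DELTA_MIN = 14000  # lower bound of normal free-running deltas
DELTA_MAX = 37000  # upper bound of normal free-running deltas

def classify_delta(delta):
    """Label one delta-count sample against the nominal band."""
    if delta > DELTA_MAX:
        return "fast"    # e.g. the ~2x pattern of case 2
    if delta < DELTA_MIN:
        return "slow"    # e.g. the transients of cases 1.1/1.2
    return "nominal"

print([classify_delta(d) for d in (26000, 52000, 96)])
# -> ['nominal', 'fast', 'slow']
```

Note that a band check like this would not catch the subtler ~0.85 scale factor of case 3, which stays inside the nominal range and only shows up against neighboring counters.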


\\
==== UPDATE: Same pattern as case (2) appearing in 201612-TAMU tests ====
----

We also see this pattern in the [[data_analysis:v7-dynamic:201612-tamu:iob:0-raw | LVCMOS (3V3) tests from 201612-TAMU]].
The "2x" delta-count factor appears in three 201612-TAMU runs; a significant amount of noise glitches prevents us from finding more of these pathological patterns. \\


\\
==== UPDATE: Similar pattern as case (2) appearing in IOSERDES tests ====
----
In the [[data_analysis:v7-dynamic:201703-tamu:ioserdes:0-raw | IOSERDES runs]] we find "anomalous speed" factors of 0.5 and 1.5, so far in a few verified events.


\\
===== HSTL: Individual runs =====
----
Plots obtained after processing the HSTL runs.


\\
** The following plot corresponds to test run 100: ** \\
{{ data_analysis:v7-dynamic:201703-tamu:iob:hstl:testrun100.png }}


\\
** The following plot corresponds to test run 101: ** \\
{{ data_analysis:v7-dynamic:201703-tamu:iob:hstl:testrun101.png }}


\\
** The following plot corresponds to test run 102: ** \\
{{ data_analysis:v7-dynamic:201703-tamu:iob:hstl:testrun102.png }}


\\
** The following plot corresponds to test run 103: ** \\
{{ data_analysis:v7-dynamic:201703-tamu:iob:hstl:testrun103.png }}
